The renewed interest from the scientific community in machine learning (ML) is opening many new areas of research. Here we focus on how novel trends in ML are providing opportunities to improve the field of computational fluid dynamics (CFD). In particular, we discuss synergies between ML and CFD that have already shown benefits, and we also assess areas that are under development and may produce important benefits in the coming years. We also believe that it is important to emphasize a balanced perspective of cautious optimism for these emerging approaches.
Since the derivation of the Navier-Stokes equations, it has been possible to numerically solve real-world viscous flow problems through computational fluid dynamics (CFD). Nevertheless, despite rapid advances in central processing unit (CPU) performance, the computational cost of simulating transient flows whose physics occurs at very small time/grid scales remains impractical. In recent years, machine learning (ML) techniques have received considerable attention across industry, and this large wave has propagated various interests into the fluid dynamics community. Recent ML-CFD studies have shown that it is unrealistic to fully suppress the growth of error as the interval between the training time and the prediction time of a data-driven method increases. The development of a practical CFD acceleration methodology that applies ML remains an open problem. The objective of this study is therefore to develop a realistic ML strategy based on physics-informed transfer learning and to validate the accuracy and acceleration performance of that strategy using an unsteady CFD dataset. The strategy determines when to carry out transfer learning by monitoring the residuals of the governing equations in a cross-coupled computational framework. Our hypothesis is therefore that the prediction of a continuous fluid-flow time series is feasible, because periodically performed intermediate CFD simulations not only reduce the growing residuals but also update the network parameters. Notably, the cross-coupling strategy with a grid-based network model does not sacrifice simulation accuracy for computational acceleration. The simulation was accelerated by a factor of 1.8 under the laminar counterflow CFD dataset conditions, including the parameter-update time. This feasibility study used the open-source CFD software OpenFOAM and the open-source ML software TensorFlow.
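As a hedged illustration of the cross-coupled loop described above, the sketch below alternates cheap network predictions with occasional CFD corrections and fine-tunes the network whenever a monitored residual exceeds a threshold. The solver, the residual function, the network size, and the tolerance are all illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a cross-coupled CFD/ML loop
# with residual-triggered transfer learning. Solver and residual are stubs.
import numpy as np
import tensorflow as tf

def cfd_step(state):
    """Placeholder for one step of a CFD solver (e.g., an OpenFOAM call)."""
    return 0.99 * state + 0.01 * np.roll(state, 1)

def residual(state):
    """Placeholder for the residual of the governing equations."""
    return float(np.abs(state - cfd_step(state)).mean())

surrogate = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(128,)),
    tf.keras.layers.Dense(128),
])
surrogate.compile(optimizer="adam", loss="mse")

state = np.random.rand(128).astype("float32")
threshold = 1e-2  # assumed residual tolerance

for step in range(100):
    # Cheap ML prediction of the next flow state.
    state = surrogate(state[None, :]).numpy()[0]
    if residual(state) > threshold:
        # Residual too large: run the expensive CFD solver to correct the
        # state and fine-tune (transfer-learn) the network on the fresh data.
        corrected = cfd_step(state)
        surrogate.fit(state[None, :], corrected[None, :], epochs=5, verbose=0)
        state = corrected
```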
The sustainability of urban environments is an increasingly relevant problem. Air pollution plays a key role in the degradation of the environment and in the health of the citizens exposed to it. In this chapter, we provide a review of approaches to modeling air pollution, with a focus on the application of machine-learning methods. Indeed, machine-learning methods have been shown to improve the accuracy of traditional air pollution approaches while limiting the development cost of the models. Machine-learning tools have opened new ways of studying air pollution, such as flow-dynamics modeling or remote-sensing approaches.
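As a minimal, purely illustrative sketch of the kind of data-driven pollution model the chapter reviews, the snippet below fits a random-forest regressor to synthetic meteorological features; the feature set, units, and target relationship are invented for demonstration.

```python
# Illustrative only: a data-driven pollution model fitted on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 10, n),    # wind speed [m/s]
    rng.uniform(-5, 35, n),   # temperature [degC]
    rng.uniform(0, 5000, n),  # traffic volume [vehicles/h] (assumed proxy)
])
# Synthetic NO2 concentration: emissions diluted by wind, plus noise.
y = 20 + 0.01 * X[:, 2] / (1 + X[:, 0]) + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MAE [ug/m3]:", mean_absolute_error(y_test, model.predict(X_test)))
```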
Machine learning is rapidly becoming a core technology for scientific computing, with numerous opportunities to advance the field of computational fluid dynamics. In this perspective, we highlight some of the areas of highest potential impact, including accelerating direct numerical simulations, improving turbulence closure modeling, and developing enhanced reduced-order models. We also discuss emerging areas of machine learning that are promising for computational fluid dynamics, as well as some potential limitations that should be considered.
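To make the reduced-order-modeling theme concrete, the sketch below builds a standard linear reduced-order model (POD via the SVD of a snapshot matrix), the classical baseline that ML-enhanced approaches aim to improve; the snapshot data are synthetic placeholders.

```python
# Sketch of a linear reduced-order model (POD/SVD) on synthetic snapshots.
import numpy as np

n_points, n_snapshots, r = 500, 100, 5      # grid size, snapshots, rank
X = np.random.rand(n_points, n_snapshots)   # placeholder flow snapshots

X_mean = X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(X - X_mean, full_matrices=False)

modes = U[:, :r]                            # dominant spatial modes
coeffs = modes.T @ (X - X_mean)             # temporal coefficients
X_rom = X_mean + modes @ coeffs             # rank-r reconstruction

energy = (S[:r] ** 2).sum() / (S ** 2).sum()
print(f"Energy captured by {r} modes: {energy:.2%}")
```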
We use Gaussian stochastic weight averaging (SWAG) to assess the model-form uncertainty associated with neural-network-based function approximation relevant to fluid flows. SWAG approximates a posterior Gaussian distribution of each weight, given training data and a constant learning rate. With access to this distribution, it is able to create multiple models with various combinations of sampled weights, which can be used to obtain ensemble predictions. The average of such an ensemble can be regarded as the "mean estimation," while its standard deviation can be used to construct "confidence intervals," which enable us to perform uncertainty quantification (UQ) with regard to the training process of neural networks. We utilize representative neural-network-based function approximation tasks for the following cases: (i) a two-dimensional circular-cylinder wake; (ii) the DayMET dataset (maximum daily temperature in North America); (iii) a three-dimensional square-cylinder wake; and (iv) urban flow, in order to assess the generalizability of the present idea across a variety of complex datasets. SWAG-based UQ can be applied regardless of the network architecture, and we therefore demonstrate the applicability of the method for two types of neural networks: (i) a model combining a convolutional neural network (CNN) and a multi-layer perceptron (MLP); and (ii) far-field state estimation from sectional data with a two-dimensional CNN. We find that SWAG can obtain physically interpretable confidence-interval estimates from the viewpoint of model-form uncertainty. This capability supports its use for a wide range of problems in science and engineering.
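A simplified, diagonal-covariance sketch of the SWAG idea is given below: weight snapshots are collected under a constant learning rate, a Gaussian is fit per weight, and sampled weight sets form an ensemble whose mean and standard deviation serve as the estimate and the confidence interval. The toy regression task, network size, and snapshot schedule are assumptions, not the authors' setup.

```python
# Simplified diagonal-covariance SWAG-style UQ sketch (not the authors' code).
import numpy as np
import tensorflow as tf

x = np.linspace(-1, 1, 200).astype("float32")[:, None]
y = np.sin(3 * x) + 0.1 * np.random.randn(*x.shape).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05), loss="mse")
model.fit(x, y, epochs=200, verbose=0)        # burn-in training

snapshots = []
for _ in range(20):                           # SWAG collection phase
    model.fit(x, y, epochs=5, verbose=0)      # constant learning rate
    snapshots.append(np.concatenate([w.ravel() for w in model.get_weights()]))

snaps = np.stack(snapshots)
mean, std = snaps.mean(axis=0), snaps.std(axis=0) + 1e-8  # per-weight Gaussian

def set_flat_weights(flat):
    """Reshape a flat weight vector back into the model's weight list."""
    shapes = [w.shape for w in model.get_weights()]
    out, i = [], 0
    for s in shapes:
        n = int(np.prod(s))
        out.append(flat[i:i + n].reshape(s))
        i += n
    model.set_weights(out)

preds = []
for _ in range(30):                           # sample an ensemble of weight sets
    set_flat_weights(np.random.normal(mean, std).astype("float32"))
    preds.append(model.predict(x, verbose=0))

preds = np.stack(preds)
mu, sigma = preds.mean(axis=0), preds.std(axis=0)  # mean estimate and CI width
```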
Urban areas are not only among the largest contributors to climate change but, with their dense populations, they are also among the most vulnerable regions that will jointly experience its negative impacts. In this paper, we address some of the opportunities offered by satellite remote-sensing imaging and artificial intelligence (AI) to automatically measure the climate adaptation of cities. We present a framework combining AI and simulation that may be useful for extracting indicators from remote-sensing images and may help produce predictive estimates of the future state of these climate-adaptation-related indicators. As such models become more powerful and are used in real life, they may help decision-makers and early responders choose the best actions to sustain the well-being of society, natural resources, and biodiversity. We emphasize that this is an open and ongoing area of research for many scientists, and we therefore provide an in-depth discussion of the challenges and limitations of data-driven methods and of predictive estimation models in general.
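As a toy illustration of extracting one climate-adaptation-related indicator from imagery, the snippet below computes a vegetation index (NDVI) and a vegetated-area fraction from a synthetic two-band scene; the bands and the threshold are placeholders rather than part of the proposed framework.

```python
# Toy indicator extraction from a synthetic two-band "satellite" scene.
import numpy as np

red = np.random.rand(256, 256)   # placeholder red-band reflectance
nir = np.random.rand(256, 256)   # placeholder near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-8)
green_fraction = float((ndvi > 0.3).mean())  # assumed NDVI threshold for vegetation
print(f"Estimated vegetated fraction of the scene: {green_fraction:.2%}")
```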
The digital revolution has brought an ethical crossroads of technology, behavior, and truth. However, because digital platforms have been used to build global systems of confusion and ignorance of authenticity, a comprehensive and constructive ethical framework is needed. The unequal structure of the global system leads to dynamic changes and systemic problems that have a disproportionate impact on the most vulnerable. Ethical frameworks based solely on the individual level are no longer sufficient, as they lack the expressiveness necessary to provide solutions to the new challenges. A new ethical vision must include an understanding of scale, of complex interconnections, and of the cause-and-effect relationships of modern social systems. Many of these systems are internally fragile and highly sensitive to external factors and threats, which leads to unethical situations that demand systemic solutions that nevertheless remain centered on the individual. Moreover, the multi-layered, network-like organization of society generates a set of forces that prevent certain communities from developing properly. Digital technology also affects the individual level, posing the risk of a more homogeneous, predictable, and ultimately controllable humanity. To protect the core of humanity and the shared aspiration to truth, a new ethical framework must empower individuality and uniqueness as well as cultural heterogeneity, thereby addressing the negative outcomes of digitalization. Only by combining human-centered and collective digital development will it be possible to build ethically grounded new social models and interactions. This vision calls for science to use computational tools to strengthen ethical frameworks and principles that support authentic actions capable of transforming and configuring the properties of social systems.
Credit scoring models are the primary instrument used by financial institutions to manage credit risk. The scarcity of research on behavioral scoring is due to difficult data access: the obligation to maintain the privacy and security of borrowers' information keeps financial institutions from collaborating in research initiatives. In this work, we present a methodology for evaluating the performance of models trained on synthetic data when they are applied to real-world data. Our results show that synthetic data quality degrades as the number of attributes increases. Nevertheless, creditworthiness assessment models trained on synthetic data show only a 3\% reduction in AUC and a 6\% reduction in KS compared with models trained on real data. These results are significant because they encourage credit risk research based on synthetic data, making it possible to preserve borrowers' privacy and to address problems that until now have been hampered by limited data availability.
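A hedged sketch of the evaluation idea, train a scorer on synthetic data and measure AUC and KS on real data, is shown below; both datasets are simulated stand-ins, since the actual methodology relies on real and synthesized borrower records.

```python
# Sketch: train on "synthetic" data, evaluate AUC and KS on "real" data.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_data(n, shift=0.0):
    """Simulated borrower features and binary default labels."""
    X = rng.normal(shift, 1.0, size=(n, 5))
    p = 1 / (1 + np.exp(-(X @ np.array([0.8, -0.5, 0.3, 0.0, 0.2]))))
    return X, (rng.uniform(size=n) < p).astype(int)

X_syn, y_syn = make_data(5000, shift=0.1)   # stand-in for the synthetic training set
X_real, y_real = make_data(2000)            # stand-in for the real evaluation set

scores = LogisticRegression().fit(X_syn, y_syn).predict_proba(X_real)[:, 1]
auc = roc_auc_score(y_real, scores)
ks = ks_2samp(scores[y_real == 1], scores[y_real == 0]).statistic
print(f"AUC on real data: {auc:.3f}  KS: {ks:.3f}")
```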
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices, as well as the bottlenecks, faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
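As an illustration of the patch-based strategy that most respondents reported for oversized samples, the sketch below tiles a large image into fixed-size patches; the image, patch size, and stride are arbitrary examples.

```python
# Illustrative patch-based tiling of a large image for training.
import numpy as np

def extract_patches(image, patch=256, stride=256):
    """Yield strided 2D patches from a large image array."""
    h, w = image.shape[:2]
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            yield image[y:y + patch, x:x + patch]

large_image = np.random.rand(2048, 2048)        # placeholder for an oversized sample
patches = list(extract_patches(large_image))
print(f"{len(patches)} patches of shape {patches[0].shape}")
```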
Despite the impact of psychiatric disorders on clinical health, early-stage diagnosis remains a challenge. Machine learning studies have shown that classifiers tend to be overly narrow in the diagnosis prediction task. The overlap between conditions leads to high heterogeneity among participants that is not adequately captured by classification models. To address this issue, normative approaches have surged as an alternative method. By using a generative model to learn the distribution of healthy brain data patterns, we can identify the presence of pathologies as deviations or outliers from the distribution learned by the model. In particular, deep generative models showed great results as normative models to identify neurological lesions in the brain. However, unlike most neurological lesions, psychiatric disorders present subtle changes widespread in several brain regions, making these alterations challenging to identify. In this work, we evaluate the performance of transformer-based normative models to detect subtle brain changes expressed in adolescents and young adults. We trained our model on 3D MRI scans of neurotypical individuals (N=1,765). Then, we obtained the likelihood of neurotypical controls and psychiatric patients with early-stage schizophrenia from an independent dataset (N=93) from the Human Connectome Project. Using the predicted likelihood of the scans as a proxy for a normative score, we obtained an AUROC of 0.82 when assessing the difference between controls and individuals with early-stage schizophrenia. Our approach surpassed recent normative methods based on brain age and Gaussian Process, showing the promising use of deep generative models to help in individualised analyses.
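The scoring step can be sketched as follows: treat the likelihood assigned by the generative model as a normative score and compute the AUROC between groups. The likelihood values below are simulated placeholders; in the study they come from the trained transformer-based model.

```python
# Sketch of the normative scoring step only, with simulated likelihoods.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
loglik_controls = rng.normal(-100.0, 5.0, size=60)   # placeholder log-likelihoods
loglik_patients = rng.normal(-106.0, 6.0, size=33)   # assumed lower under the model

scores = np.concatenate([loglik_controls, loglik_patients])
labels = np.concatenate([np.zeros(60), np.ones(33)])  # 1 = patient

# Lower likelihood means a larger deviation from the healthy distribution,
# so negate the score to rank patients higher.
auroc = roc_auc_score(labels, -scores)
print(f"AUROC (controls vs. early-stage schizophrenia, simulated): {auroc:.2f}")
```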